A Cookbook for Community-driven Data Collection of Impaired Speech in Low-Resource Languages

Salihs, Sumaya Ahmed, Wiafe, Isaac, Abdulai, Jamal-Deen, Atsakpo, Elikem Doe, Ayoka, Gifty, Cave, Richard, Ekpezu, Akon Obu, Holloway, Catherine, Tomanek, Katrin, Winful, Fiifi Baffoe Payin

arXiv.org Artificial Intelligence

This study presents an approach for collecting speech samples to build Automatic Speech Recognition (ASR) models for impaired speech, particularly in low-resource languages. It aims to democratize ASR technology and data collection by developing a "cookbook" of best practices and training for community-driven data collection and ASR model building. As a proof of concept, this study curated the first open-source dataset of impaired speech in Akan, a widely spoken indigenous language in Ghana. The study involved participants with speech impairments from diverse backgrounds. The resulting dataset, along with the cookbook and open-source tools, is publicly available to enable researchers and practitioners to create inclusive ASR technologies tailored to the unique needs of speech-impaired individuals. In addition, this study presents the initial results of fine-tuning open-source ASR models to better recognize impaired speech in Akan.


Vernacular? I Barely Know Her: Challenges with Style Control and Stereotyping

Aich, Ankit, Liu, Tingting, Giorgi, Salvatore, Isman, Kelsey, Ungar, Lyle, Curtis, Brenda

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are increasingly being used in educational and learning applications. Research has demonstrated that controlling for style, to fit the needs of the learner, fosters increased understanding, promotes inclusion, and helps with knowledge distillation. To understand the capabilities and limitations of contemporary LLMs in style control, we evaluated five state-of-the-art models: GPT-3.5, GPT-4, GPT-4o, Llama-3, and Mistral-instruct-7B across two style control tasks. We observed significant inconsistencies in the first task, with model performances averaging between 5th and 8th grade reading levels for tasks intended for first-graders, and standard deviations up to 27.6. For our second task, we observed a statistically significant improvement in performance from 0.02 to 0.26. However, we find that even without stereotypes in reference texts, LLMs often generated culturally insensitive content during their tasks. We provide a thorough analysis.

Figure 1: Overall view of this paper. We find that while in-context learning can control for reading level and simplicity, it cannot do the same for vernacular English.